Institute of Information and Communications Technology (ICTI) Information Technology Department Operating System (IT413) 2017-1396
Chapter 4: Process & Thread
Contents:
- What Is a Process?
- Relationships between Processes and Programs
- Child Processes
- Benefits of Child Process
- Concurrency and Parallelism
- Implementing a process
- Process States and State Transitions
- Causes of Fundamental State Transitions for a Process
- Process Context and the Process Control Block
- Event Handling
- Sharing, Communication, and Synchronization Between Processes
- Thread
- Thread States and State Transitions
- Advantages of Threads over Processes
What Is a Process? A process is an execution of a program: it actually performs the actions specified in the program, using the resources allocated to it. A program is a passive entity that does not perform any actions by itself; it has to be executed if the actions it calls for are to take place. Program P shown in Figure (a) contains declarations of a file info and a variable item, and statements that read values from info, use them to perform some calculations, and print a result before coming to a halt. During execution, instructions of this program use values in its data area and the stack to perform the intended calculations. Figure (b) shows an abstract view of its execution. The instructions, data, and stack of program P constitute its address space. To realize execution of P, the OS allocates memory to accommodate P's address space, allocates a printer to print its results, sets up an arrangement through which P can access the file info, and schedules P for execution. The CPU is shown as a lightly shaded box because it is not always executing instructions of P; the OS shares the CPU between the execution of P and the executions of other programs.
Sharing, Communication, and Synchronization Between Processes: Processes of an application need to interact with one another because they work toward a common goal.
What Is a Process? (Cont.)
Relationships between Processes and Programs: The OS does not know anything about the nature of a program, including the functions and procedures in its code. It knows only what it is told through system calls; the rest is under the control of the program. Thus, the functions of a program may be separate processes, or they may constitute the code part of a single process. The table shows two kinds of relationships that can exist between processes and programs. A one-to-one relationship exists when a single execution of a sequential program is in progress, for example, the execution of program P in the figure. A many-to-one relationship exists between many processes and a program in two cases: Many executions of a program may be in progress at the same time; processes representing these executions have a many-to-one relationship with the program. During execution, a program may make a system call to request that a specific part of its code be executed concurrently, i.e., as a separate activity occurring at the same time.
Child Processes: The kernel initiates an execution of a program by creating a process for it. For lack of a technical term for this process, we will call it the primary process for the program execution. The primary process may make system calls as described in the previous section to create other processes; these processes become its child processes, and the primary process becomes their parent. A child process may itself create other processes, and so on. The parent-child relationships between these processes can be represented in the form of a process tree, which has the primary process as its root. A child process may inherit some of the resources of its parent; it can obtain additional resources during its operation through system calls.
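The parent-child arrangement described above can be sketched in Python. This is a minimal illustration, not kernel code: a primary process creates two child processes (a small process tree with the primary process as its root) and waits for them to terminate; the `child_task` function and queue-based reporting are assumptions of this sketch.

```python
# Sketch: a primary process creating child processes, assuming a
# POSIX-style system where new processes can be forked.
import multiprocessing as mp

def child_task(name, queue):
    # Each child performs part of the work and reports back to the parent.
    queue.put(f"{name} done")

if __name__ == "__main__":
    queue = mp.Queue()
    # The primary process creates two children; they form a process tree
    # with the primary process as its root.
    children = [mp.Process(target=child_task, args=(f"child-{i}", queue))
                for i in range(2)]
    for c in children:
        c.start()
    for c in children:
        c.join()          # the parent waits for its children to terminate
    results = sorted(queue.get() for _ in children)
    print(results)        # both children have reported completion
```

Each child here inherits the parent's code, mirroring the inheritance of resources mentioned above; the queue is the system-provided channel through which children communicate with their parent.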
Benefits of Child Process:
Concurrency and Parallelism: Parallelism is the quality of occurring at the same time. Two events are parallel if they occur at the same time, and two tasks are parallel if they are performed at the same time. Concurrency is an illusion of parallelism. Thus, two tasks are concurrent if there is an illusion that they are being performed in parallel, whereas, in reality, only one of them may be performed at any time. In an OS, concurrency is obtained by interleaving operation of processes on the CPU, which creates the illusion that these processes are operating at the same time. Parallelism is obtained by using multiple CPUs, as in a multiprocessor system, and operating different processes on these CPUs.
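The interleaving that produces concurrency can be made concrete with a toy scheduler. In this sketch, generators stand in for preemptible processes (an assumption of the sketch, not how a real kernel works): two tasks are advanced alternately on a single "CPU", so only one runs at any time, yet both make progress.

```python
# Toy illustration of concurrency by interleaving on one CPU.
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"    # one "time slice" of work

def interleave(tasks):
    trace = []
    while tasks:
        t = tasks.pop(0)       # pick the next ready task (round robin)
        try:
            trace.append(next(t))
            tasks.append(t)    # put it back at the end of the ready queue
        except StopIteration:
            pass               # task terminated; drop it
    return trace

print(interleave([task("A", 2), task("B", 2)]))
# ['A:0', 'B:0', 'A:1', 'B:1'] -- the trace alternates between A and B,
# creating the illusion that they run in parallel.
```

True parallelism, by contrast, would require running the two tasks on different CPUs at the same time, as in a multiprocessor system.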
Implementing a process: Fundamental functions of the kernel for controlling processes: The kernel is activated when an event, that is, a situation that requires the kernel's attention, leads to either a hardware interrupt or a system call. The kernel then performs four fundamental functions to control the operation of processes.
Implementing a process (Cont.): 1. Context save: Saving CPU state and information concerning resources of the process whose operation is interrupted. 2. Event handling: Analyzing the condition that led to an interrupt, or the request by a process that led to a system call, and taking appropriate actions. 3. Scheduling: Selecting the process to be executed next on the CPU. 4. Dispatching: Setting up access to resources of the scheduled process and loading its saved CPU state in the CPU to begin or resume its operation.
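The four functions above can be sketched as one pass through a simulated kernel cycle. All names here are hypothetical, and the first-come-first-served scheduling choice is an assumption of the sketch; real kernels implement these steps in hardware-specific code.

```python
def kernel_cycle(event, running, ready_queue):
    """One pass through the kernel's four fundamental functions (a sketch)."""
    trace = []
    trace.append(f"context save: {running}")   # 1. save CPU state of the interrupted process
    trace.append(f"event handling: {event}")   # 2. analyze the interrupt or system call
    ready_queue.append(running)                # interrupted process becomes ready again
    scheduled = ready_queue.pop(0)             # 3. scheduling: select the next process (FCFS here)
    trace.append(f"scheduling: {scheduled}")
    trace.append(f"dispatching: {scheduled}")  # 4. load its saved CPU state to resume it
    return scheduled, trace

proc, trace = kernel_cycle("timer interrupt", "P1", ["P2", "P3"])
print(proc)  # P2 -- the process at the head of the ready queue is dispatched
```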
Process States and State Transitions: An operating system uses the notion of a process state to keep track of what a process is doing at any moment. Definition (Process state): The indicator that describes the nature of the current activity of a process.
Process States and State Transitions (Cont.): A state transition for a process Pi is a change in its state. A state transition is caused by the occurrence of some event, such as the start or end of an I/O operation. When the event occurs, the kernel determines its influence on the activities in processes, and accordingly changes the state of each affected process.
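The legal transitions can be captured in a small table. This sketch uses the classic ready/running/blocked/terminated states; exact state names and the set of transitions vary between operating systems.

```python
# Sketch of legal process state transitions and their causes.
TRANSITIONS = {
    ("ready", "running"):     "dispatched by the scheduler",
    ("running", "ready"):     "preempted, e.g., on a timer interrupt",
    ("running", "blocked"):   "made a resource or I/O request that must wait",
    ("blocked", "ready"):     "awaited event occurred, e.g., I/O completion",
    ("running", "terminated"): "completed its operation",
}

def transition(state, new_state):
    cause = TRANSITIONS.get((state, new_state))
    if cause is None:
        # e.g., blocked -> running is illegal: a blocked process must
        # first become ready before it can be dispatched.
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = transition("ready", "running")   # the process is dispatched
state = transition(state, "blocked")     # e.g., it starts an I/O operation
```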
Causes of Fundamental State Transitions for a Process:
Process Context and the Process Control Block: The kernel allocates resources to a process and schedules it for use of the CPU. Accordingly, the kernel's view of a process consists of two parts: the code, data, and stack of the process, together with information concerning memory and other resources, such as files, allocated to it; and information concerning the execution of the program, such as the process state, the CPU state including the stack pointer, and some other items of information described later in this section. These two parts of the kernel's view are contained in the process context and the process control block (PCB), respectively. This arrangement enables different OS modules to access relevant process-related information conveniently and efficiently.
Process Context and the Process Control Block (Cont.): The process context consists of the following: 1. Address space of the process: The code, data, and stack components of the process. 2. Memory allocation information: Information concerning the memory areas allocated to the process. This information is used by the memory management unit (MMU) during the operation of the process. 3. Status of file processing activities: Information about the files being used, such as the current positions in the files. 4. Process interaction information: Information necessary to control the interaction of the process with other processes, e.g., the ids of its parent and child processes, and interprocess messages sent to it that have not yet been delivered. 5. Resource information: Information concerning the resources allocated to the process. 6. Miscellaneous information: Miscellaneous information needed for the operation of the process.
Process Context and the Process Control Block (Cont.): The process control block (PCB) of a process contains three kinds of information concerning the process: identification information, such as the process id and the id of its parent process; process state information, such as the process state and the CPU state; and information used in controlling its operation, such as its priority.
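The two-part view described above can be sketched as a pair of records. The field names below are hypothetical, modeled on the lists in this section; a real kernel lays these structures out in fixed memory formats.

```python
# Sketch of the kernel's two-part view of a process.
from dataclasses import dataclass, field

@dataclass
class ProcessContext:
    address_space: dict      # code, data, and stack components
    memory_info: dict        # allocation info used by the MMU
    file_status: dict        # current positions in open files
    interaction_info: dict   # parent/child ids, undelivered messages
    resources: list          # resources allocated to the process

@dataclass
class PCB:
    pid: int                 # identification information
    parent_pid: int
    state: str = "ready"     # process state information
    cpu_state: dict = field(default_factory=dict)  # PSW, GPRs, stack pointer
    priority: int = 0        # information used in controlling its operation

pcb = PCB(pid=12, parent_pid=1)
```

Keeping the bulky context separate from the small, frequently consulted PCB is what lets the scheduler touch only PCBs while the memory manager works with contexts.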
Event Handling: The following events occur during the operation of an OS: 1. Process creation event: A new process is created. 2. Process termination event: A process completes its operation. 3. Timer event: The timer interrupt occurs. 4. Resource request event: A process makes a resource request. 5. Resource release event: A process releases a resource. 6. I/O initiation request event: A process wishes to initiate an I/O operation. 7. I/O completion event: An I/O operation completes.
Event Handling (Cont.): 8. Message send event: A message is sent by one process to another. 9. Message receive event: A message is received by a process. 10. Signal send event: A signal is sent by one process to another. 11. Signal receive event: A signal is received by a process. 12. Program interrupt event: The current instruction in the running process malfunctions. 13. Hardware malfunction event: A unit in the computer's hardware malfunctions.
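Event handling can be pictured as a dispatch table that maps each event kind to a handler acting on the affected process's state. The handler names and the three events shown are illustrative choices for this sketch, covering only a subset of the thirteen events listed above.

```python
# Sketch: kernel event handling as a dispatch table over PCB states.
def handle_process_creation(pcbs, pid):
    pcbs[pid] = "ready"          # a new process starts in the ready state

def handle_io_completion(pcbs, pid):
    pcbs[pid] = "ready"          # the blocked process becomes ready again

def handle_process_termination(pcbs, pid):
    del pcbs[pid]                # the process's PCB is released

HANDLERS = {
    "process creation": handle_process_creation,
    "I/O completion": handle_io_completion,
    "process termination": handle_process_termination,
}

def handle_event(event, pcbs, pid):
    HANDLERS[event](pcbs, pid)   # the event-handling step of the kernel cycle

pcbs = {}
handle_event("process creation", pcbs, 7)
```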
Thread: Definition (Thread): An execution of a program that uses the resources of a process. A process creates a thread through a system call. The thread does not have resources of its own, so it does not have a context; it operates by using the context of the process, and accesses the resources of the process through it. We use the phrases thread(s) of a process and parent process of a thread to describe the relationship between a thread and the process whose context it uses. Note that threads are not a substitute for child processes; an application would create child processes to execute different parts of its code, and each child process can create threads to achieve concurrency.
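The fact that threads use the resources of their parent process can be demonstrated directly: in this sketch, two threads update the same counter in the process's data area, something two separate processes could not do without explicit shared memory. The lock is needed because the threads operate concurrently on the shared variable.

```python
# Sketch: threads of one process share its data area.
import threading

counter = 0                      # shared data of the parent process
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:               # synchronize access to the shared data
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # the parent process waits for its threads
print(counter)                   # 2000: both threads updated the same variable
```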
Thread (Cont.): Process Pi has three threads, which are represented by wavy lines inside the circle representing process Pi. The figure shows an implementation arrangement: process Pi has a context and a PCB. Each thread of Pi is an execution of a program, so it has its own stack and a thread control block (TCB), which is analogous to the PCB and stores the following information: 1. Thread scheduling information: thread id, priority, and state. 2. CPU state, i.e., contents of the PSW and GPRs. 3. Pointer to the PCB of the parent process. 4. TCB pointer, which is used to make lists of TCBs for scheduling.
Thread States and State Transitions Barring the difference that threads do not have resources allocated to them, threads and processes are analogous. Hence thread states and thread state transitions are analogous to process states and process state transitions. When a thread is created, it is put in the ready state because its parent process already has the necessary resources allocated to it. It enters the running state when it is dispatched. It does not enter the blocked state because of resource requests, because it does not make any resource requests; however, it can enter the blocked state because of process synchronization requirements.
Advantages of Threads over Processes: